$\mathcal{H}(2,1^6)$ Extra Branching Occurs Over One Point

This notebook addresses the case where there is one simple zero over each $2$-torsion point, and all three remaining zeros descend to a point in $\{(0,0), (1,0)\}$.

We recall that in $\mathcal{H}(2,1^6)$, $d_{opt} = 36$ and by the arrangement above, there is a $2$-torsion point in the boundary of each horizontal cylinder that has exactly one simple zero above it.

In [1]:
import re
import time
import itertools
from multiprocessing import Pool

Step 1

In [2]:
#This loads all of the 1-cylinder diagrams in H(2,1^4) formatted as python lists

with open('ST5_data//1-cyl_diags//cyl_diags-2_1_1_1_1-c-1', 'r') as file:
    H2_1t4_cyl_diags = eval(file.read())

#This loads all of the functions for processing cylinder diagrams

%run ./ST5_fcns/cyl_diag_fcns.ipynb

#This runs almost instantly and constructs all of the binary lists 
#associated to each simple zero for every cylinder diagram

H2_1t4_vertex_data = strat_odd_sc(H2_1t4_cyl_diags)

There is a single $1$-cylinder diagram in $\mathcal{H}(1,1)$ that completely describes the identification between the top of $C_1$ and the bottom of $C_2$. Moreover, each saddle connection is topologically indistinguishable from every other, so it is unnecessary to consider cyclic permutations of the saddle connections: the results would be identical. Since the base torus is a $2 \times 2$ torus of unit squares and no simple zero has a saddle connection to itself, the saddle connection lengths are partitions of $72$ into four odd numbers.

See Example 5.9.

Among partitions of $72$ into four numbers, the largest part is at least $72/4 = d_{opt}/2 = 18$; since all four parts must be odd, the minimum possible largest part is in fact $19$.

For partitions of $72$ into $11$ numbers, the largest part is at least $\lceil 72/11 \rceil = 7$. However, the minimum possible largest part is in fact $8$: if $7$ were the largest part, the remaining parts would be at most $6$, so at least six $7$'s would be required (from $7k + 6(11-k) \geq 72$, i.e. $k \geq 6$). Since no more than $4$ odd numbers are allowed in the partition, one of the parts must be at least $8$.
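Both minima can be verified directly; a quick brute-force sketch (ours, not part of the original computation):

```python
from itertools import combinations_with_replacement

# Partitions of 72 into four odd parts: find the smallest achievable largest part
odds = range(1, 72, 2)
quads = [q for q in combinations_with_replacement(odds, 4) if sum(q) == 72]
print(min(max(q) for q in quads))  # 19

# With 11 parts, at most 4 of them odd (each at most 7) and the rest even
# (each at most 6), the sum is at most 4*7 + 7*6 = 70 < 72,
# so some part must be at least 8
print(4 * 7 + 7 * 6)  # 70
```

The second check is a counting bound equivalent to the six-$7$'s argument above.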

Solving $72 - 2(19) - 2t_0 \geq 0$ implies that the largest value of $t_0$ is $17$.

Solving $72 - 2s_0 - 2(8) \geq 0$ implies that the largest value of $s_0$ is $28$. However, $s_0$ is necessarily odd, which implies $s_0$ is at most $27$.

In summary:

$$s_0 \in \{19, 21, 23, 25, 27\} \text{ and } 8 \leq t_0 \leq 17$$
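The two computations above are instances of the joint constraint $72 - 2s_0 - 2t_0 \geq 0$, i.e. $s_0 + t_0 \leq 36$. A minimal sketch of the resulting bound (max_odd_s0 is our illustrative helper; it reproduces the table in the docstring of the s_max function defined later in this notebook):

```python
def max_odd_s0(t0, total=36):
    # Largest odd s0 satisfying s0 + t0 <= total, i.e. 72 - 2*s0 - 2*t0 >= 0
    s0 = total - t0
    return s0 if s0 % 2 == 1 else s0 - 1

print([max_odd_s0(t0) for t0 in range(8, 18)])
# [27, 27, 25, 25, 23, 23, 21, 21, 19, 19]
```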

Step 2

In [3]:
#This loads all of the standard partition functions needed for nearly every case

%run ./ST5_fcns/partition_functions.ipynb
In [4]:
#This is toggled off so that it doesn't overwrite the file every time, but it is very fast to generate

if False:
    create_sc_partition_file((), d_opt = 36, part_length = 4, t0_range = range(19,28), 
                             filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H1t2_part')
In [5]:
#This is toggled off by default; enable it to load the odd partitions of 72
#corresponding to the H(1,1) boundary (the appendix function below uses H1t2_part)

if False:
    with open('ST5_data//H_2_1t6//1_branch_point//partitions//H1t2_part', 'r') as file:
        H1t2_part = eval(file.read())

This is the only case where the number of partitions is so great that the files must be segmented to several levels just to keep each file under 100MB in size (the median file size is 350KB). Segmenting the partitions also facilitates parallel processing of the partition files: the largest file takes 5 days, and the large majority are computed in well under 2 hours. We split the partitions based on their starting numbers rather than ensuring equal file sizes. This has the perhaps undesirable effect of producing many empty files. For example, the set of partitions starting with $(16, 3, 4, 2)$ such that every odd number is adjacent to another odd number is of course the empty set. We make the following splits:

8, 9: level 2

10, 11, 15: level 3

12, 13, 14, 16, 17: level 4
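Why a file can be seen to be empty from its starting numbers alone can be sketched as follows (prefix_viable is a hypothetical helper for illustration; the notebook itself does not use it):

```python
def prefix_viable(prefix):
    # An interior odd entry flanked by two even entries can never acquire an
    # adjacent odd entry, so no completion satisfies the adjacency condition
    for i in range(1, len(prefix) - 1):
        if prefix[i] % 2 == 1 and prefix[i-1] % 2 == 0 and prefix[i+1] % 2 == 0:
            return False
    return True

print(prefix_viable((16, 3, 4, 2)))  # False: the 3 is isolated between evens
print(prefix_viable((16, 3, 3, 2)))  # True: the two 3's are adjacent
```

This is only a necessary condition: a viable prefix can still yield an empty file for other reasons.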

In [6]:
def partition_evaluate_list(print_lengths = False):
    '''
    This function is only intended for use in the 1 branch point case in the stratum H(2,1,1,1,1,1,1).
    Therefore, we make no attempt at generalizing it to other strata.
    It simply generates the lists on which to apply the partition function.
    '''
    level_2_lists = [(i,j) for i in [8,9] for j in range(1,i+1)]
    if print_lengths:
        print('level 2 lists have length ' + str(len(level_2_lists)))
    level_3_lists = [(i,j,k) for i in [10,11,15] for j in range(1,i+1) for k in range(1,i+1)]
    if print_lengths:
        print('level 3 lists have length ' + str(len(level_3_lists)))
    level_4_lists = [(i,j,k,l) for i in [12, 13, 14, 16, 17] for j in range(1,i+1) for k in range(1,i+1) for l in range(1,i+1)]
    if print_lengths:
        print('level 4 lists have length ' + str(len(level_4_lists)))
    total_list = level_2_lists + level_3_lists + level_4_lists
    if print_lengths:
        print('total length ' + str(len(total_list)))
    return total_list

dummy_var = partition_evaluate_list(print_lengths = True)
level 2 lists have length 17
level 3 lists have length 446
level 4 lists have length 15678
total length 16141
In [7]:
def partitions_to_make_list():
    available_list = []
    total_possibilities = partition_evaluate_list()
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_'
    for part_begin in total_possibilities:
        t_list_begin = list(part_begin)
        file_tag = ''
        for t in t_list_begin:
            file_tag += str(t) + '_'
        file_tag = file_tag[:-1]
        part_list_filename = filename_root + file_tag
        try:
            with open(part_list_filename, 'r') as file:
                part_list_string = file.read()
                part_list_not_exists = False
#                print(part_list_not_exists)
        except IOError:
            part_list_not_exists = True
        if part_list_not_exists:
            available_list += [part_begin]
    return available_list

print(len(partition_evaluate_list()))
print(len(partitions_to_make_list()))
16141
0
In [8]:
#WARNING: This takes about a week to run on a single processor
#Options for one or more processors are given
#The partitions in the resulting 16141 files require 30GB of memory

def create_sc_partition_file_pool(part_to_make):
    return create_sc_partition_file(part_to_make, d_opt = 36, part_length = 11,
                                    filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_', 
                                    t0_range = range(8,18))

if False:
    pool = Pool(processes=2)
    pool.map(create_sc_partition_file_pool, partitions_to_make_list())

if False:
    #If we only want one processor, this is easier to work with
    for part_to_make in partitions_to_make_list():
        create_sc_partition_file(part_to_make, d_opt = 36, part_length = 11, 
                                 filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_',
                                 t0_range = range(8,18))

Step 3 and Step 4

What is unique to the largest stratum in the one branch point case is that generating all of the align lists below would require approximately 5-10TB of storage. While a hard drive can handle such sizes, we opt for a more memory-efficient approach: we construct the align_list directly from each partition and process it immediately via the Window Lemma, which suffices to reduce the number of cases to the empty set. (In every other case, the align_list data is stored to file.)

We also use a more efficient perm_equal function here than we use in other cases. This takes advantage of the fact that $11$ is an odd number.

Lemma: If a number is partitioned into an odd number of parts satisfying the assumptions above, then the function perm_equal below suffices to find all cyclic permutations that make the corresponding binary lists equal.

Proof: Since the 1's come in adjacent pairs, there are exactly two sub-tuples of zeros (one of them could be empty) such that one necessarily has odd length and the other even length. Therefore, there is a unique cyclic permutation of any other such binary tuple that allows the zeros to align to yield equality of the tuples.
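For length $11$ this uniqueness can be confirmed by brute force; a sketch (the generator pair_tuples is ours, for illustration only):

```python
def rotate(t, i):
    n = len(t)
    return tuple(t[(j + i) % n] for j in range(n))

def pair_tuples(n=11):
    # All length-n binary tuples whose four 1's form two adjacent pairs
    out = set()
    for p in range(n):
        for q in range(n):
            pos = {p, (p + 1) % n, q, (q + 1) % n}
            if len(pos) == 4:
                out.add(tuple(1 if j in pos else 0 for j in range(n)))
    return out

# Each such tuple equals exactly one of its 11 rotations (the trivial one),
# so two cyclically equivalent tuples align under exactly one shift and the
# first match found by perm_equal is the only one
assert all(sum(rotate(t, i) == t for i in range(11)) == 1 for t in pair_tuples())
print(len(pair_tuples()))  # 44
```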

In [9]:
def perm_equal(a,b):
    #The reason that this function works and does not miss additional equalities is because 11 is odd
    length = len(a)
    for i in range(length):
        b_perm = [b[(j+i)%length] for j in range(length)]
        if a == b_perm:
            return (True,i)
    return (False,None)

def extract_cyl_diag(cyl_diag_plus_vdat, part_red):
    length = len(part_red)
    cyl_bot = cyl_diag_plus_vdat[0][0][0]
    cyl_top = cyl_diag_plus_vdat[0][0][1]
    vert_data = cyl_diag_plus_vdat[1]
    align_list = []
    for i in vert_data:
        equal_perm = perm_equal(i,part_red)
        if equal_perm[0]:
            align = [cyl_bot[(j+equal_perm[1])%length] for j in range(length)]
            align_list.append([align,cyl_top])
#            print(align, i)
    return align_list

def cyl_diag_plus_one_partition_align(part, master_cyl_diags_plus_data):
    total = []
    bin_list = [j%2 for j in part]
    for cyl_diag_plus_vdat in master_cyl_diags_plus_data:
        proc_diag = extract_cyl_diag(cyl_diag_plus_vdat, bin_list)
        if len(proc_diag) > 0:
            total.append((part,proc_diag))
    return total

def cyl_diag_plus_part_data(part_list, master_cyl_diags_plus_data):
    total = []
#    print(str(len(part_list)) + ' partitions to search')
#    count = 0
    for part in part_list:
#        count += 1
        bin_list = [j%2 for j in part]
        for cyl_diag_plus_vdat in master_cyl_diags_plus_data:
            proc_diag = extract_cyl_diag(cyl_diag_plus_vdat, bin_list)
            if len(proc_diag) > 0:
                total.append((part,proc_diag))
#        if count%10000==0:
#            print(str(count) + ' partitions processed', part)
    return total
In [10]:
def visible_sc_check(s1, t_i_part, t_cyl_diag_raw, t_1_top_init, print_inner = False):
    #The partition t_i_part and the diagram t_cyl_diag_raw are already filtered so that they line up
    length = len(t_i_part)
    t_cyl_diag = [0,0]
    t_cyl_diag[0] = t_cyl_diag_raw[0]
    t_cyl_diag[1] = t_cyl_diag_raw[1][::-1]
    t_1_top_index = t_cyl_diag[1].index(t_cyl_diag[0][0])
    t_i_top = tuple([t_cyl_diag[1][(i+t_1_top_index)%length] for i in range(length)])
    if print_inner:
        print(t_i_top)
    t_i_top_rel_bot_indices = tuple([t_cyl_diag[0].index(i) for i in t_i_top])
    if print_inner:
        print(t_i_top_rel_bot_indices)
    t_i_top_part = [t_i_part[i] for i in t_i_top_rel_bot_indices]
    if print_inner:
        print(t_i_top_part)
    t_i_top_sc_start_pos = tuple(sorted([(t_i_top[i], (2*s1+t_1_top_init+sum(t_i_top_part[:i]))%72) for i in range(length)], key=lambda x: x[1]))
    #In t_i_top_sc_start_pos, the first coordinate is the saddle connection label and
    #the second coordinate is the starting position of that saddle connection
    if print_inner:
        print(t_i_top_sc_start_pos)
    t_i_sc_length_shift = tuple([sum(t_i_part[:i]) for i in range(length)])
    if print_inner:
        print(t_i_sc_length_shift)
    for i in enumerate(t_i_sc_length_shift):
        t_i = t_cyl_diag[0][i[0]]
        if print_inner:
            print('t_i', t_i)
        t_i_length = t_i_part[i[0]]
        if print_inner:
            print('t_i_length', t_i_length)
        t_i_orig_start = [t_i_tup for t_i_tup in t_i_top_sc_start_pos if t_i_tup[0] == t_i][0]
        if print_inner:
            print('t_i_orig_start', t_i_orig_start)
        t_i_new_start = (t_i_orig_start[1] + t_i_sc_length_shift[i[0]])%72
        if print_inner:
            print('t_i_new_start', t_i_new_start)
        if t_i_new_start < 2*s1 or t_i_new_start > 72 - 2*t_i_length:
            if print_inner:
                print('False', 2*s1, 72 - 2*t_i_length)
            return False
    return True

def visible_top_init_check(s1, t_i_part, t_cyl_diag):
    t_1_top_init_max = 72-2*s1
    total_t_1_top_init = []
    for t_1_top_init in range(0,t_1_top_init_max+1,2):
        if visible_sc_check(s1, t_i_part, t_cyl_diag, t_1_top_init):
            total_t_1_top_init.append(t_1_top_init)
    if len(total_t_1_top_init) > 0:
        return (s1, t_i_part, t_cyl_diag, total_t_1_top_init)
    return []
In [11]:
def part_list_visible_sc_check(s1, part_list, sub_list_depth):
    #This function is designed to be minimally intensive on RAM
    #The idea is to construct each align list per partition and 
    #only store data that passes the non-visible check
    t_list_begin = part_list[0][:sub_list_depth]
    print(t_list_begin)
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//align_list_visible//H2_1t4_align_list'
    t_1_max = 72-2*s1
    total = []
    count = 0
    part_list_len_str = str(len(part_list))
    for part in part_list:
        align_list = cyl_diag_plus_one_partition_align(part, H2_1t4_vertex_data)
        for i in align_list:
            count += 1
            t_i_part = i[0]
            t_1 = t_i_part[0]
            if t_1 <= t_1_max:
                for t_cyl_diag in i[1]:
                    admissible = visible_top_init_check(s1, t_i_part, t_cyl_diag)
                    if len(admissible) > 0:
                        total.append(admissible)
            if count%100000==0:
                print(count,i)
                print('Processing ' + str(part_list.index(part)+1) + ' of ' + part_list_len_str + ' partitions')
    #Build the file tag from t_list_begin so an empty final align_list cannot raise an IndexError
    file_tag = ''
    for t in t_list_begin:
        file_tag += str(t)+'_'
    file_tag = file_tag[:-1]
    filename = filename_root + file_tag + '_visible' + str(s1)
    with open(filename, 'w') as file:
        file.write(str(total))
    print(filename + ' written')
    return total

def part_list_visible_sc_check_empty_dummy(s1, t_begin_list):
    #This returns the empty list in an appropriately named file
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//align_list_visible//H2_1t4_align_list'
    file_tag = ''
    for t in t_begin_list:
        file_tag += str(t)+'_'
    file_tag = file_tag[:-1]
    filename = filename_root+file_tag+'_visible'+str(s1)
    with open(filename, 'w') as file:
        file.write(str([]))
    print(filename + ' written')
    return []


def part_list_visible_sc_check_from_file(s1, t_begin_list):
    sub_list_depth = len(t_begin_list)
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_'
    file_tag = ''
    for t in t_begin_list:
        file_tag += str(t)+'_'
    file_tag = file_tag[:-1]
    part_list_filename = filename_root+file_tag
    with open(part_list_filename, 'r') as file:
        part_list_string = file.read()
        if len(part_list_string) < 5:
            part_list_visible_sc_check_empty_dummy(s1, t_begin_list)
            return []
        part_list = eval_part(part_list_string)
    print(part_list_filename + ' partition list file converted to list type')
    print(str(len(part_list)) + ' partitions to process')
    non_visible_list = part_list_visible_sc_check(s1, part_list, sub_list_depth)
    print(str(len(non_visible_list)) + ' admissible configurations found')
    return (len(non_visible_list), non_visible_list)

def s_max(t0):
    '''[s_max(t0) for t0 in range(8,18)]
    yields [27, 27, 25, 25, 23, 23, 21, 21, 19, 19]'''
    return 27 - 2*int((t0-8)/2)

def check_visible_range(t0, t1_range = None, s0_range = None):
    if s0_range is None:
        s0_range = range(19, s_max(t0) + 1, 2)
    if t1_range is None:
        t1_range = range(1,t0 + 1)
    for t1 in t1_range:
        for s0 in s0_range:
            tic = time.clock()
            part_list_visible_sc_check_from_file(s0, (t0,t1))
            toc = time.clock()
            if toc - tic > 1:
                print(toc - tic)
    return 'The non-visible list has been constructed.'

def visible_range_available_list(s0_range_full = True, align_visible_count_trig = False):
    available_list = []
    total_possibilities = partition_evaluate_list()
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_'
    align_visible_proc_count = 0
    for part_begin in total_possibilities:
        s_max_val = s_max(part_begin[0])
        if s0_range_full:
            s0_range = range(19,s_max_val+1,2)
        else:
            s0_range = [19]
        t_list_begin = list(part_begin)
        file_tag = ''
        for t in t_list_begin:
            file_tag += str(t) + '_'
        file_tag = file_tag[:-1]
        part_list_filename = filename_root + file_tag
        try:
            with open(part_list_filename, 'r') as file:
                part_list_string = file.read()
                part_list_exists = True
        except IOError:
            part_list_exists = False
        if part_list_exists:
            visible_filename_root = 'ST5_data//H_2_1t6//1_branch_point//align_list_visible//H2_1t4_align_list'
            align_list_visible_not_exists = [True]*len(s0_range)
            for s1 in enumerate(s0_range):
                visible_filename = visible_filename_root + file_tag + '_visible' + str(s1[1])
                try:
                    with open(visible_filename, 'r') as file:
                        align_list_visible_string = file.read()
                        align_list_visible_not_exists[s1[0]] = False
                        align_visible_proc_count += 1
                except IOError:
                    align_list_visible_not_exists[s1[0]] = True
        for s1 in enumerate(s0_range):
            if part_list_exists and align_list_visible_not_exists[s1[0]]:
                available_list += [[s1[1], part_begin]]
    if align_visible_count_trig:
        return align_visible_proc_count
    return available_list

def part_list_visible_sc_check_from_file_for_pool(combine_item):
    return part_list_visible_sc_check_from_file(combine_item[0], combine_item[1])

Combining Files

The following functions do not fall exactly under the category of Step 5 because Step 4 produced the empty set in every case. Therefore, we formally take a union of all of these empty sets to prove that the union of all of the files is indeed the empty set.

In [12]:
def combine_visible_align_list(s0_range_full = False, print_visible_files = False):
    if len(visible_range_available_list(s0_range_full)) > 0 or len(partitions_to_make_list()) > 0:
        warning_tag = '_PARTIAL'
        print('INCOMPLETE FILES: PARTIAL tag added \n')
    else:
        warning_tag = ''
        print('Complete visible_align_list files are available')
    available_list = []
    visible_align_list_join = []
    total_possibilities = partition_evaluate_list()
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_'
    align_visible_proc_count = 0
    for part_begin in total_possibilities:
        s_max_val = s_max(part_begin[0])
        if s0_range_full:
            s0_range = range(19,s_max_val+1,2)
            non_full_tag = ''
        else:
            s0_range = [19]
            non_full_tag = '_19'
        t_list_begin = list(part_begin)
        file_tag = ''
        for t in t_list_begin:
            file_tag += str(t) + '_'
        file_tag = file_tag[:-1]
        part_list_filename = filename_root + file_tag
        try:
            with open(part_list_filename, 'r') as file:
                part_list_string = file.read()
                part_list_exists = True
        except IOError:
            part_list_exists = False
        if part_list_exists:
            visible_filename_root = 'ST5_data//H_2_1t6//1_branch_point//align_list_visible//H2_1t4_align_list'
            align_list_visible_not_exists = [True]*len(s0_range)
            for s1 in enumerate(s0_range):
                visible_filename = visible_filename_root + file_tag + '_visible' + str(s1[1])
                try:
                    with open(visible_filename, 'r') as file:
                        align_list_visible_string = file.read()
                        align_list_visible_eval = eval(align_list_visible_string)
                        align_list_visible_not_exists[s1[0]] = False
                        align_visible_proc_count += 1
                        if print_visible_files:
                            print(visible_filename)
                        visible_align_list_join += align_list_visible_eval
                except IOError:
                    align_list_visible_not_exists[s1[0]] = True
        for s1 in enumerate(s0_range):
            if part_list_exists and align_list_visible_not_exists[s1[0]]:
                available_list += [[s1[1], part_begin]]
    combine_filename1 = 'ST5_data//H_2_1t6//1_branch_point//H2_1t4_align_list_visible_combine'
    combine_filename2 = non_full_tag + warning_tag
    combine_filename = combine_filename1 + combine_filename2
    with open(combine_filename, 'w') as file:
        file.write(str(visible_align_list_join))
        print(combine_filename + ' written from ' + str(align_visible_proc_count) + ' files')
    return visible_align_list_join
In [13]:
#This counts the total number of partitions among all files that begin with t0
def total_partition_count(t0):
    total_possibilities = partition_evaluate_list()
    filename_root = 'ST5_data//H_2_1t6//1_branch_point//partitions//H2_1t4_part_'
    total = 0
    t0_total_possibilities = [j for j in total_possibilities if j[0] == t0]
#    print(len(t0_total_possibilities))
    for part_begin in t0_total_possibilities:
        t_list_begin = list(part_begin)
        file_tag = ''
        for t in t_list_begin:
            file_tag += str(t) + '_'
        file_tag = file_tag[:-1]
        part_list_filename = filename_root + file_tag
        with open(part_list_filename, 'r') as file:
            part_list_str = file.read()
#            print(part_list_filename + ' read')
            if len(part_list_str) < 4:
                part_list_length = 0
            else:
                part_list_length = len(eval_part(part_list_str))
        total += part_list_length
    return total

Counting Cases and Status Checks

Because the computation could take months, several functions are presented below for checking its status while it is in progress. Since results are stored piecemeal in files, the functions simply count the number of files that have been written. We found these simple functions useful for monitoring the status over the course of the several months that this ran.

In [14]:
#This gives a count of the number of partitions and cases that need to be checked
#This is for reference
if False:
    for j in range(8,18):
        print(j, total_partition_count(j))
In [15]:
total_part_lengths = [(8, 124740), (9, 843280), (10, 19416600), (11, 22091040), (12, 106442980), (13, 68120416), (14, 186198040), (15, 92809072), (16, 199020640), (17, 87357616)]
total_parts = sum([j[1] for j in total_part_lengths])
print(total_parts)
print(total_parts*155)
782424424
121275785720

We will only consider windows of size $19$ by Lemma 5.A.1.

In [16]:
#Status Check: Run this to see the progress which does rely on Lemma 5.A.1
print('Progress Report:')
#print(str(len(partitions_to_make_list())) + ' out of ' + str(len(partition_evaluate_list())) + ' partitions to make')
print(str(visible_range_available_list(s0_range_full = False, align_visible_count_trig = True))
      + ' out of ' + str(len(partition_evaluate_list())) + ' files processed with window 19')
print(str(len(visible_range_available_list(s0_range_full = False))) + ' files with window 19 left to process')
Progress Report:
16141 out of 16141 files processed with window 19
0 files with window 19 left to process

This is the line that proves that there are no examples after all of the files have been generated. It can be run at any time, but the tag "PARTIAL" will be added to the resulting filename to indicate that the result is incomplete.

In [17]:
combine_visible_align_list(print_visible_files = False)
Complete visible_align_list files are available
ST5_data//H_2_1t6//1_branch_point//H2_1t4_align_list_visible_combine_19 written from 16141 files
Out[17]:
[]

These are more detailed status check functions.

In [18]:
#This stores all of the files in the temporary list so that
#various elements of it can be viewed using the functions
#below without generating it from scratch each time
temp = visible_range_available_list(s0_range_full = False)
In [19]:
#This generates a three column list from the list temp
#Column 1 contains the length of the saddle connection
#Column 2 contains the number of files for that saddle connection left to process
#Column 3 contains the total number of files corresponding to that saddle connection
full_list = partition_evaluate_list()
for t in range(8, 18):
    print(t, len([j for j in temp if j[1][0] == t]), len([j for j in full_list if j[0] == t]))
(8, 0, 8)
(9, 0, 9)
(10, 0, 100)
(11, 0, 121)
(12, 0, 1728)
(13, 0, 2197)
(14, 0, 2744)
(15, 0, 225)
(16, 0, 4096)
(17, 0, 4913)
In [20]:
#This displays all of the elements left to process where the length of the first saddle connection is specified, e.g. 11
#This relies on temp and the result is only as recent as the most recent update of the list temp
temp11list = [j for j in temp if j[1][0] == 11]
print(len(temp11list))
print(temp11list)
0
[]
In [21]:
temp12list = [j for j in temp if j[1][0] == 12]
print(len(temp12list))
print(temp12list)
0
[]
In [22]:
temp14list = [j for j in temp if j[1][0] == 14]
print(len(temp14list))
print(temp14list)
0
[]
In [23]:
temp15list = [j for j in temp if j[1][0] == 15]
print(len(temp15list))
print(temp15list)
0
[]
In [24]:
temp16list = [j for j in temp if j[1][0] == 16]
print(len(temp16list))
print(temp16list)
0
[]
In [25]:
#This makes it easy to monitor the status of specific partitions when the full list of
#remaining partitions of a given length is too long to print
start_list = [17,4,8]

def list_start_eq(list1, list2):
    for j in enumerate(list2):
        if j[1] != list1[j[0]]:
            return False
    return True

temp17list = [j for j in temp if list_start_eq(j[1],start_list)]
print(len(temp17list))
print(temp17list)
0
[]

These are functions to run the window with size 19 case, the arbitrary length window case, and an individual case.

251 minutes per 100,000 partitions is a rough estimate, but because each partition has a varying number of align lists, this is imprecise. Even more significantly, the processor speed will have a big impact here.
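Combining this rough rate with the partition total computed earlier gives a back-of-the-envelope runtime estimate (our arithmetic, under the stated assumptions, not a measured figure):

```python
total_parts = 782424424                  # total partition count from above
minutes = total_parts / 100000.0 * 251   # assuming ~251 minutes per 100,000 partitions
days_single_core = minutes / (60 * 24)
print(round(days_single_core))       # roughly 1364 days on one core
print(round(days_single_core / 15))  # roughly 91 days across a 15-process pool
```

This is consistent with the multi-month runtime mentioned above and the 15-process pool used below.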

In [ ]:
#This allows the running of the functions for arbitrarily many cores in the window length 19 case
#i.e. This uses Lemma 5.A.1
if False:
    pool = Pool(processes=15)
    pool.map(part_list_visible_sc_check_from_file_for_pool, visible_range_available_list(s0_range_full = False))
In [ ]:
#This allows the running of the functions for arbitrarily many cores in the arbitrary window length case
#i.e. This does not use Lemma 5.A.1
if False:
    pool = Pool(processes=14)
    pool.map(part_list_visible_sc_check_from_file_for_pool, visible_range_available_list())
In [ ]:
#This allows the running of any specific value
#A timer is added for the reference of the user
part_to_run = [19, (15, 2, 4)]
tic = time.clock()
part_list_visible_sc_check_from_file_for_pool(part_to_run)
toc = time.clock()
print(toc - tic)

Older Commands Not Assuming Lemma 5.A.1

In [ ]:
#Status Check: Run this to see the progress without relying on Lemma 5.A.1
print('Progress Report:')
print(str(len(partitions_to_make_list())) + ' out of ' + str(len(partition_evaluate_list())) + ' partitions to make')
print(str(len(visible_range_available_list())) + ' partitions with windows to process')
print(str(visible_range_available_list(align_visible_count_trig = True)) + ' out of 27691 files processed')

Appendix: Unnecessary Functions

It is natural to try to perform the visible check on the partitions corresponding to the 1-cylinder diagram in H(1,1). However, the search does not return any useful results: in fact, no combination of parameters leads to a contradiction. Therefore, we excluded this search altogether. The interested reader can use the function below to see that this is the case. The function is identical to the one used above, but many commands have been simplified due to the particular setting.

In [ ]:
#The following function is completely useless in that it returns the list of all combinatorial possibilities.
#Since it does not result in a reduction, there is no reason to execute it.

def part_list_visible_sc_check_simple(s1, part_list = H1t2_part):
    #This function is designed to be minimally intensive on RAM
    #The idea is to construct each align list per partition and 
    #only store data that passes the non-visible check
    filename_root = 'H1_1_align_list'
    t_1_max = 72-2*s1
    total = []
    count = 0
    part_list_len_str = str(len(part_list))
    for part in part_list:
        count += 1
        t_1 = part[0]
        if t_1 <= t_1_max:
            admissible = visible_top_init_check(s1, part, [[0,1,2,3], [0,1,2,3]])
            if len(admissible) > 0:
                total.append(admissible)
        if count%100000==0:
            print(count, part)
            print('Processing ' + str(part_list.index(part)+1) + ' of ' + part_list_len_str + ' partitions')
    filename = filename_root+'_visible'+str(s1)
    with open(filename, 'w') as file:
        file.write(str(total))
    print(filename + ' written')
    return total
In [ ]:
#At this point, none of the files above need to be loaded for this to run.
#Once the files above are generated, the file sizes are small enough that these can be loaded on the fly.

def align_list_visible_to_window_t0_pair(align_list_visible):
    coord_pair = []
    for config in align_list_visible:
        coord_pair.append((config[0], config[1][0]))
    return coord_pair

def combine_align_list_visible(align_list_visible1, align_list_visible2):
    print(str(len(align_list_visible1)) + '*' + str(len(align_list_visible2)) + '=' 
          +str(len(align_list_visible1)*len(align_list_visible2)) + ' combinations to check')
    coord_pair_list1 = align_list_visible_to_window_t0_pair(align_list_visible1)
    print(align_list_visible1, len(align_list_visible1))
    print(coord_pair_list1, len(coord_pair_list1))
    width = sum(align_list_visible1[0][1])
    print('Coordinate pair list generated for first align_list_visible')
    print(coord_pair_list1)
    total = []
    for coord_pair in enumerate(coord_pair_list1):
        config1 = align_list_visible1[coord_pair[0]]
        for config2 in align_list_visible2:
            if coord_pair[1] == (config2[1][0], config2[0]):
                for s_start in config1[3]:
                    for t_start in config2[3]:
                        check_starts = 2*sum(coord_pair[1]) + s_start + t_start
                        #This is all that needs to be computed to prove that the start numbers align
                        if check_starts == width:
                            print(s_start, t_start)
                            total += [(align_list_visible1[coord_pair[0]][1:3], s_start, config2[1:3])]
                        else:
                            print('Failed check_starts test', config1, config2)
    return total

def combine_align_list_visible_write_file(s_range = range(4,10), 
                                          s_filename_root = 'align_list_visible//H_2_2_1_1_align_list_', 
                                          t_range = range(9,16,2), 
                                          t_filename_root = 'align_list_visible//H1_1_align_list_',
                                          root_dir = 'ST5_data//H_2_2_1t4//1_branch_point//'):
    admissible_list = []
    for s in s_range:
        for t in t_range:
            t_align_list_visible_filename = root_dir + t_filename_root + str(t) + '_visible_' + str(s)
            with open(t_align_list_visible_filename, 'r') as file:
                t_align_list_visible = eval(file.read())
                print(t_align_list_visible_filename + ' read')
            s_align_list_visible_filename = root_dir + s_filename_root + str(s) + '_visible_' + str(t)
            with open(s_align_list_visible_filename, 'r') as file:
                s_align_list_visible = eval(file.read())
                print(s_align_list_visible_filename + ' read')
            if t_align_list_visible == [] or s_align_list_visible == []:
                print('empty list')
                admissible_list += []
            else:
                admissible_list += combine_align_list_visible(t_align_list_visible, s_align_list_visible)
    admissible_list_filename = root_dir + 'admissible_list'
    with open(admissible_list_filename, 'w') as file:
            file.write(str(admissible_list))
    return 'admissible_list written with ' + str(len(admissible_list)) + ' elements'
In [ ]:
if False:
    for s in range(8,18):
        part_list_visible_sc_check_simple(s)